Cost-sensitive Boosting with p-norm Loss Functions and its Applications
Authors
Abstract
In practical applications of classification, there are often varying costs associated with different types of misclassification (e.g., fraud detection, anomaly detection, and medical diagnosis), motivating the need for so-called "cost-sensitive" classification. In this paper, we introduce a family of novel boosting methods for cost-sensitive classification by applying the theory of gradient boosting to p-norm based cost functionals, and we establish theoretical guarantees as well as their empirical advantage over existing algorithms.
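To make the recipe concrete, here is a minimal sketch of functional gradient boosting on a cost-weighted p-th-power margin loss. The specific loss form (a p-th-power hinge), the per-example cost vector `c`, and the regression-stump weak learners are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def pnorm_cost_loss(y, F, c, p=2):
    # Hypothetical cost-weighted p-th-power hinge surrogate:
    # L(F) = sum_i c_i * max(0, 1 - y_i F(x_i))**p, with y_i in {-1, +1}
    return np.sum(c * np.maximum(0.0, 1.0 - y * F) ** p)

def pnorm_cost_grad(y, F, c, p=2):
    # Pointwise functional gradient:
    # dL/dF_i = -p * c_i * y_i * max(0, 1 - y_i F_i)**(p-1)
    m = np.maximum(0.0, 1.0 - y * F)
    return -p * c * y * m ** (p - 1)

def fit_cost_sensitive_gbm(X, y, c, p=2, n_rounds=100, lr=0.1):
    """Gradient boosting: each round fits a stump to the negative gradient."""
    F = np.zeros(len(y))
    models = []
    for _ in range(n_rounds):
        residual = -pnorm_cost_grad(y, F, c, p)
        h = DecisionTreeRegressor(max_depth=1).fit(X, residual)
        F += lr * h.predict(X)
        models.append(h)
    return models

def predict(models, X, lr=0.1):
    # Sign of the accumulated ensemble score.
    F = np.zeros(X.shape[0])
    for h in models:
        F += lr * h.predict(X)
    return np.sign(F)
```

Since each round fits the negative functional gradient, raising p concentrates subsequent rounds on high-cost examples with large margin violations.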
Similar Resources
Guess-Averse Loss Functions For Cost-Sensitive Multiclass Boosting
Cost-sensitive multiclass classification has recently acquired significance in several applications through the introduction of multiclass datasets with well-defined misclassification costs. The design of classification algorithms for this setting is considered. It is argued that the unreliable performance of current algorithms is due to the inability of the underlying loss functions to enforc...
Appendix: Sharing Features in Multi-class Boosting via Group Sparsity
In this document we provide a complete derivation for multi-class boosting with group sparsity and a full explanation of the ADMM algorithm presented in the main paper. 1 Multi-class boosting with group sparsity We first provide the derivation for the multi-class logistic loss with the ℓ1,2-norm. We then show the difference between our boosting with the ℓ1,2-norm and the ℓ1,∞-norm. We then briefly discuss our group s...
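As a hedged illustration of the objective that snippet refers to (the symbols W, h_j, and λ below are placeholders, not the appendix's own notation), an ℓ1,2-regularized multi-class logistic boosting problem typically reads:

```latex
% Illustrative group-sparse multi-class boosting objective.
% W \in R^{J \times K} holds the coefficient of weak learner h_j for class k,
% and F_k(x_i) = \sum_j W_{jk} h_j(x_i) is the score of class k.
\min_{W}\ \sum_{i=1}^{n} \log\!\Big(1 + \sum_{k \neq y_i}
    \exp\big(F_k(x_i) - F_{y_i}(x_i)\big)\Big)
    \;+\; \lambda \sum_{j=1}^{J} \big\lVert W_{j,\cdot} \big\rVert_2
```

Because the penalty acts on whole rows of W, each weak learner is either shared across all classes or dropped entirely; replacing the inner ℓ2 norm with ℓ∞ gives the ℓ1,∞ variant the snippet contrasts.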
A Quartic Quality Loss Function and Its Properties
We propose a quartic function to represent a family of continuous quality loss functions. Depending on the choice of its parameters, the shape of this function within the specification limits can be either symmetric or asymmetric, and it can be either similar to the ubiquitous quadratic loss function or somewhat closer to the conventional step function. We examine this family of loss functions i...
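One natural parameterization of such a family (an assumption for illustration, since the paper's exact form is truncated above) expands the loss in powers of the deviation from a target value τ:

```latex
% Illustrative quartic quality loss about a target value \tau.
% k_3 \neq 0 makes the loss asymmetric about \tau; k_3 = k_4 = 0
% recovers the familiar quadratic (Taguchi-style) loss, while a
% large k_4 steepens the loss near the specification limits,
% bringing it closer to a step function.
L(y) \;=\; k_2\,(y - \tau)^2 \;+\; k_3\,(y - \tau)^3 \;+\; k_4\,(y - \tau)^4
```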
Discussion of Boosting Papers
We congratulate the authors for their interesting papers on boosting and related topics. Jiang deals with the asymptotic consistency of AdaBoost. Lugosi and Vayatis study the convex optimization of loss functions associated with boosting. Zhang studies the loss functions themselves. Their results imply that boosting-like methods can reasonably be expected to converge to Bayes classifiers under ...
The operational matrix of fractional derivative of the fractional-order Chebyshev functions and its applications
In this paper, we introduce a family of fractional-order Chebyshev functions based on the classical Chebyshev polynomials. We calculate and derive the operational matrix of the derivative of fractional order $\gamma$ in the Caputo sense using the fractional-order Chebyshev functions. This matrix leads to a low computational cost for the numerical solution of fractional-order differential equations to the ...
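For orientation, the Caputo derivative of order γ referred to above is the standard definition below; the operational-matrix relation uses generic placeholder notation (Φ for the basis vector, D^(γ) for the matrix), not the paper's own symbols.

```latex
% Caputo fractional derivative of order \gamma, with n-1 < \gamma \le n:
{}^{C}\!D^{\gamma} f(x) \;=\; \frac{1}{\Gamma(n-\gamma)}
    \int_{0}^{x} \frac{f^{(n)}(t)}{(x-t)^{\gamma-n+1}}\, dt .

% Operational-matrix idea: collect the fractional-order Chebyshev
% functions in a vector \Phi(x); their Caputo derivative is then
% (approximately) expressible in the same basis via a constant matrix,
{}^{C}\!D^{\gamma} \Phi(x) \;\approx\; \mathbf{D}^{(\gamma)}\, \Phi(x),
% which reduces a fractional differential equation to an algebraic system.
```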